
    Shock and Release Temperatures in Molybdenum

    Shock and release temperatures in Mo were calculated, taking account of heating from plastic flow predicted using the Steinberg-Guinan model. Plastic flow was calculated self-consistently with the shock jump conditions: this is necessary for a rigorous estimate of the locus of accessible shock states. The temperatures obtained were significantly higher than predicted assuming ideal hydrodynamic loading. The temperatures were compared with surface emission spectrometry measurements for Mo shocked to around 60 GPa and then released into vacuum or into a LiF window. Shock loading was induced by the impact of a planar projectile accelerated by high explosive or in a gas gun. Surface velocimetry showed an elastic wave at the start of release from the shocked state; the amplitude of the elastic wave matched the prediction to within about 10%, indicating that the predicted flow stress in the shocked state was reasonable. The measured temperatures were consistent with the simulations, indicating that the fraction of plastic work converted to heat was in the range 70-100% for these loading conditions.
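The conversion of plastic work to heat described above reduces, at its simplest, to a temperature rise ΔT = β·W_p/(ρ·c_p), where β is the heat-conversion fraction (70-100% in the paper). A minimal sketch of that estimate follows; only β comes from the abstract, and all numerical inputs are illustrative placeholders, not data for molybdenum.

```python
# Illustrative estimate of the temperature rise from plastic-work heating.
# beta (the fraction of plastic work converted to heat, 70-100% in the
# paper) is the only quantity taken from the abstract; the other values
# are hypothetical placeholders, not measured properties of Mo.

def plastic_heating_dT(w_plastic, beta, rho, c_p):
    """Temperature rise dT = beta * w_plastic / (rho * c_p).

    w_plastic : plastic work per unit volume (J/m^3)
    beta      : fraction of plastic work converted to heat (0..1)
    rho       : density (kg/m^3)
    c_p       : specific heat capacity (J/(kg K))
    """
    return beta * w_plastic / (rho * c_p)

# Example with made-up numbers, bracketing beta between 0.7 and 1.0:
dT_low = plastic_heating_dT(w_plastic=5e8, beta=0.7, rho=10220.0, c_p=250.0)
dT_high = plastic_heating_dT(w_plastic=5e8, beta=1.0, rho=10220.0, c_p=250.0)
print(f"dT between {dT_low:.0f} K and {dT_high:.0f} K")
```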

    A Subset of the CERN Virtual Machine File System: Fast Delivering of Complex Software Stacks for Supercomputing Resources

    Delivering a reproducible environment along with complex and up-to-date software stacks on thousands of distributed and heterogeneous worker nodes is a critical task. The CernVM File System (CVMFS) was designed to help communities deploy software on worldwide distributed computing infrastructures by decoupling the software from the operating system. However, installing this file system requires the cooperation of system administrators at the remote resources, as well as HTTP connectivity to fetch dependencies from external sources. Supercomputers, which offer tremendous computing power, generally have more restrictive policies than grid sites and do not easily provide the conditions required to exploit CVMFS. Various solutions have been developed to tackle the issue, but they are often specific to one scientific community and do not address the problem as a whole. In this paper, we provide a generic utility to assist any community in installing complex software dependencies on supercomputers with no external connectivity. The approach consists of capturing the dependencies of the applications of interest, building a subset of those dependencies, testing it in a given environment, and deploying it to a remote computing resource. We validate this proposal with a real use case by exporting Gauss, a Monte Carlo simulation program from the LHCb experiment, to MareNostrum, one of the top supercomputers in the world. We provide steps to encapsulate the minimum required files and deliver a light and easy-to-update subset of CVMFS: 12.4 gigabytes instead of 5.2 terabytes for the whole LHCb repository.

    Parallelization and Optimization of an Organ-Morphogenesis Simulator, Applied to Elements of the Kidney

    For several decades, modeling living systems has been a major challenge requiring ever more work in the field of simulation. Such models open the door to a whole range of applications: decision support in environment and ecology, teaching aids, decision support for physicians, assistance in the search for new pharmaceutical treatments, so-called predictive biology, etc. Before any of these problems can be tackled, the biological system of interest must be modeled precisely, with a clear statement of the questions the model is expected to answer. Working with complex systems, of which biological systems are the archetype, raises general modeling and simulation issues. It is in this context that the company Integrative BioComputing (IBC) has been developing, since the early 2000s, the prototype of a Generic Modeling and Simulation Platform (PGMS), whose goal is to provide an environment for modeling and simulating, more simply, the processes and functions of a complete organism together with its constituent organs. Since the PGMS was still under development, it lacked the performance needed to model and simulate large biological components in acceptably short times. It was therefore decided to parallelize and optimize its implementation in order to improve its performance drastically, the goal being to model and simulate complete organs in acceptable times. The work carried out in this thesis therefore addressed several aspects of the modeling and simulation of biological systems in order to speed up their processing.
    The most time-consuming computation during PGMS execution, the calculation of physicochemical fields, was the subject of a parallelization feasibility study. Among the architectures available for parallelizing such an application, we chose general-purpose computation on graphics processing units (GPGPU). This choice was motivated, among other reasons, by the low cost of the hardware relative to its very high raw computing power, which makes it one of the most accessible parallel architectures on the market. As the feasibility study proved particularly conclusive, the parallelized field computation was then integrated into the PGMS. In parallel, we also optimized the sequential performance of the PGMS. This work yielded an 18.12x speed-up on the longest simulations (from 16 minutes for the non-optimized version using a single CPU core to 53 seconds for the optimized version, still using a single CPU core but also a GTX 500 GPU). The other major aspect of this work was improving the algorithmic performance of three-dimensional cellular-automata simulation. Such automata can both simulate biological behavior and implement modeling mechanisms such as multi-scale interactions. The research consisted mainly of original algorithmic proposals to improve the simulations run by IBC on the PGMS. The sequential speed-up, through a three-dimensional implementation of the HashLife algorithm, and the GPGPU parallelization were studied together and led to very significant reductions in computation time.
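A single update step of a three-dimensional cellular automaton, the object of the algorithmic work above, can be sketched with a dense vectorized implementation. The birth/survival rule below is a placeholder 3D Life-like rule: the thesis uses IBC's own rules and a 3D HashLife implementation, neither of which is reproduced here.

```python
# Illustrative single step of a 3D binary cellular automaton on a dense
# grid, vectorized with NumPy. Periodic boundary conditions via np.roll.
# The rule sets (birth on 5 neighbors, survival on 4-5) are invented for
# illustration, not taken from the PGMS.
import numpy as np

def step(grid, birth={5}, survive={4, 5}):
    """Advance a 3D 0/1 grid one generation over its 26-cell neighborhood."""
    neighbors = np.zeros_like(grid, dtype=int)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                if dx == dy == dz == 0:
                    continue  # a cell is not its own neighbor
                neighbors += np.roll(grid, (dx, dy, dz), axis=(0, 1, 2))
    born = (grid == 0) & np.isin(neighbors, list(birth))
    stay = (grid == 1) & np.isin(neighbors, list(survive))
    return (born | stay).astype(grid.dtype)
```

A dense step like this costs O(N³) per generation regardless of activity; HashLife instead memoizes repeated subvolumes, which is where the thesis obtains its algorithmic gains.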

    Contribution of Model-Driven Engineering to the Design of Arable-Crop Models

    This industrial PhD thesis addresses a software-production problem encountered at the company ITK. Using model-driven engineering techniques, we propose a modeling and simulation environment for plant growth. Besides being easy for agronomists and crop modelers to use, the resulting prototype automatically generates Java code for the models, which is integrated into decision-support tools running on a Java Enterprise Edition platform.
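The core mechanism described above, generating Java source from a higher-level model description, can be sketched with simple template substitution. The model format and the generated class below are invented for illustration; they are not ITK's metamodel or output.

```python
# Minimal sketch of template-based code generation, the general technique
# behind model-driven generation of Java classes. The input dict and the
# emitted class shape are hypothetical.
from string import Template

JAVA_CLASS = Template("""\
public class ${name}Model {
${fields}
}""")

def generate_java(model):
    """Render a tiny Java class from a {'name': ..., 'params': {...}} dict."""
    fields = "\n".join(
        f"    private double {p} = {v};" for p, v in model["params"].items()
    )
    return JAVA_CLASS.substitute(name=model["name"], fields=fields)

src = generate_java({"name": "WheatGrowth", "params": {"leafAreaIndex": 0.5}})
print(src)
```

Real MDE toolchains (e.g. EMF-based generators) work from a typed metamodel rather than a dict, but the render step is the same idea.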

    The impact of two coupled cirrus microphysics-radiation parameterizations on the temperature and specific humidity biases in the tropical tropopause layer in a climate model

    The impact of two different coupled cirrus microphysics-radiation parameterizations on the zonally averaged temperature and humidity biases in the tropical tropopause layer (TTL) of a Met Office climate model configuration is assessed. One parameterization is based on a linear coupling between a model prognostic variable, the ice mass mixing ratio, qi, and the integral optical properties. The second is based on the integral optical properties being parameterized as functions of qi and temperature, Tc, where the mass coefficients (i.e. scattering and extinction) are parameterized as nonlinear functions of the ratio between qi and Tc. The cirrus microphysics parameterization is based on a moment-estimation parameterization of the particle size distribution (PSD), which relates the mass moment of the PSD (i.e. the second moment, if mass is proportional to size raised to the power of 2) to all other PSD moments through the magnitude of the second moment and Tc. This same microphysics PSD parameterization is applied to calculate the integral optical properties used in both radiation parameterizations and thus ensures PSD and mass consistency between the cirrus microphysics and radiation schemes. In this paper, the temperature-independent and temperature-dependent parameterizations are shown to increase and decrease the zonally averaged temperature biases in the TTL by about 1 K, respectively. The temperature-dependent radiation parameterization is further demonstrated to have a positive impact on the specific humidity biases in the TTL, as well as decreasing the shortwave and longwave biases in the cloudy radiative effect. The temperature-dependent radiation parameterization is shown to be more consistent with TTL and global radiation observations.
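Moment-estimation PSD schemes of the kind described above predict every moment M_n of the size distribution from the mass (second) moment M_2 and the in-cloud temperature Tc, typically in the power-law form M_n = a(n, Tc) · M_2^b(n, Tc). The sketch below shows only that functional shape; the coefficient functions are invented placeholders, not the fitted coefficients used in the Met Office scheme.

```python
# Sketch of a moment-estimation PSD parameterization: the nth moment is
# predicted from the second (mass) moment M2 and temperature Tc as
#   M_n = a(n, Tc) * M2 ** b(n, Tc)
# The coefficient functions a and b below are hypothetical smooth
# placeholders chosen only so that n = 2 recovers M2 exactly.
import math

def a(n, t_c):
    # invented dependence on moment order n and temperature t_c (deg C)
    return math.exp(0.1 * (n - 2) * (1.0 + 0.01 * t_c))

def b(n, t_c):
    return 1.0 + 0.05 * (n - 2)

def moment(n, m2, t_c):
    """Estimate the nth PSD moment from the second moment and temperature."""
    return a(n, t_c) * m2 ** b(n, t_c)
```

Because optical properties are integrals over the same PSD, computing them from these moments is what keeps the microphysics and radiation schemes mass-consistent.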

    Boundary States and Black Hole Entropy

    Black hole entropy is derived from a sum over boundary states. The boundary states are labeled by energy and momentum surface densities, and parametrized by the boundary metric. The sum over state labels is expressed as a functional integral with measure determined by the density of states. The sum over metrics is expressed as a functional integral with measure determined by the universal expression for the inverse temperature gradient at the horizon. The analysis applies to any stationary, nonextreme black hole in any theory of gravitational and matter fields. Comment: 4 pages, RevTeX

    Qualification of a Null Lens Using Image-Based Phase Retrieval

    In measuring the figure error of an aspheric optic using a null lens, the wavefront contribution from the null lens must be independently and accurately characterized in order to isolate the optical performance of the aspheric optic alone. Various techniques can be used to characterize such a null lens, including interferometry, profilometry, and image-based methods. Only image-based methods, such as phase retrieval, can measure the null-lens wavefront in situ, in single-pass, at the same conjugates and in the same alignment state in which the null lens will ultimately be used, with no additional optical components. Due to the intended purpose of a null lens (e.g., to null a large aspheric wavefront with a near-equal-but-opposite spherical wavefront), characterizing a null-lens wavefront presents several challenges to image-based phase retrieval: large wavefront slopes and high-dynamic-range data decrease the capture range of phase-retrieval algorithms, increase the requirements on the fidelity of the forward model of the optical system, and make it difficult to extract diagnostic information (e.g., the system F/#) from the image data. In this paper, we present a study of these effects on phase-retrieval algorithms in the context of a null lens used in component development for the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission. Approaches for mitigation are also discussed.
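The class of algorithm discussed above can be illustrated with a Gerchberg-Saxton-style iteration: alternate between the pupil plane, where the amplitude is known, and the focal plane, where the intensity is measured, keeping the phase at each step. This is only a textbook sketch of image-based phase retrieval; the algorithms studied in the paper use more elaborate forward models and focus diversity than shown here.

```python
# Gerchberg-Saxton-style phase retrieval sketch: recover the pupil-plane
# phase from a single focal-plane intensity image. FFT stands in for the
# true optical forward model; real null-lens work needs a much more
# faithful propagation model.
import numpy as np

def gerchberg_saxton(pupil_amp, focal_intensity, n_iter=200, seed=0):
    """Iteratively estimate the pupil-plane phase."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(-np.pi, np.pi, pupil_amp.shape)  # random start
    focal_amp = np.sqrt(focal_intensity)
    for _ in range(n_iter):
        field = pupil_amp * np.exp(1j * phase)
        focal = np.fft.fft2(field)                        # to focal plane
        focal = focal_amp * np.exp(1j * np.angle(focal))  # impose measured amplitude
        back = np.fft.ifft2(focal)                        # back to pupil
        phase = np.angle(back)                            # impose pupil amplitude
    return phase
```

The "capture range" issue the abstract mentions shows up directly here: the larger the wavefront slopes, the more such an iteration stagnates in local minima unless seeded or diversified.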

    System and Method for Null-Lens Wavefront Sensing

    A method of measuring aberrations in a null lens, including assembly and alignment aberrations. The null lens may be used for measuring aberrations in an aspheric optic. Light propagates from the aspheric-optic location through the null lens while a detector is swept through the null-lens focal plane, and image data are collected at locations about the focal plane. Light propagation to the collection locations is then simulated for each collected image. The null-lens aberrations may be extracted, e.g., by applying image-based wavefront sensing to the collected images and simulation results. Accounting for the null-lens aberrations improves accuracy in measuring the aspheric-optic aberrations.